10 research outputs found

    Overdetermined independent vector analysis

    We address the convolutive blind source separation problem for the (over-)determined case where (i) the number of nonstationary target sources $K$ is less than the number of microphones $M$, and (ii) there are up to $M - K$ stationary Gaussian noises that need not be extracted. Independent vector analysis (IVA) can solve the problem by separating the mixture into $M$ sources and selecting the top $K$ highly nonstationary signals among them, but this approach wastes computation, especially when $K \ll M$. Channel reduction in preprocessing of IVA by, e.g., principal component analysis risks removing the target signals. We here extend IVA to resolve these issues. One such extension has been attained by assuming the orthogonality constraint (OC) that the sample correlation between the target and noise signals is zero. The proposed IVA, on the other hand, does not rely on OC and exploits only the independence between sources and the stationarity of the noises. This enables us to develop several efficient algorithms based on block coordinate descent methods with a problem-specific acceleration. We clarify that one such algorithm exactly coincides with the conventional IVA with OC, and also explain that the other newly developed algorithms are faster than it. Experimental results show the reduced computational load of the new algorithms compared to the conventional methods. In particular, a new algorithm specialized for $K = 1$ outperforms the others.
    Comment: To appear at the 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020).
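    The selection step mentioned in the abstract (separate into $M$ sources, then keep the $K$ most nonstationary) can be illustrated with a simple frame-power heuristic. This is not the paper's method, which exploits the source model inside IVA itself; it is only a minimal sketch of what "highly nonstationary" could mean operationally, and the function names, frame length, and scoring rule are all our assumptions:

    ```python
    import numpy as np

    def nonstationarity_score(s, frame=256):
        # Normalized variance of frame-wise power: a stationary Gaussian
        # noise scores near zero, while a nonstationary source whose power
        # fluctuates over time (e.g. speech) scores high.
        n = len(s) // frame
        p = (s[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
        return p.var() / (p.mean() ** 2 + 1e-12)

    def pick_targets(sources, K):
        # sources: (M, T) array of separated signals; keep the K signals
        # with the largest nonstationarity scores.
        scores = np.array([nonstationarity_score(s) for s in sources])
        return sorted(np.argsort(scores)[::-1][:K].tolist())
    ```

    With $K \ll M$ this selection is cheap; the cost the paper targets is the full $M$-channel separation that precedes it.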

    NoisyILRMA: Diffuse-Noise-Aware Independent Low-Rank Matrix Analysis for Fast Blind Source Extraction

    In this paper, we address the multichannel blind source extraction (BSE) of a single source in diffuse noise environments. To solve this problem even faster than fast multichannel nonnegative matrix factorization (FastMNMF) and its variant, we propose a BSE method called NoisyILRMA, a modification of independent low-rank matrix analysis (ILRMA) that accounts for diffuse noise. NoisyILRMA achieves remarkably fast BSE by incorporating an algorithm developed for independent vector extraction. In addition, to improve the BSE performance of NoisyILRMA, we propose a mechanism that switches the source model from ILRMA-like nonnegative matrix factorization to a more expressive source model during optimization. In the experiment, we show that NoisyILRMA runs faster than a FastMNMF algorithm while maintaining the BSE performance. We also confirm that the switching mechanism improves the BSE performance of NoisyILRMA.
    Comment: 5 pages, 3 figures, accepted for European Signal Processing Conference 2023 (EUSIPCO 2023).

    ISS2: An Extension of Iterative Source Steering Algorithm for Majorization-Minimization-Based Independent Vector Analysis

    A majorization-minimization (MM) algorithm for independent vector analysis optimizes a separation matrix $W = [w_1, \ldots, w_m]^h \in \mathbb{C}^{m \times m}$ by minimizing a surrogate function of the form $\mathcal{L}(W) = \sum_{i=1}^m w_i^h V_i w_i - \log |\det W|^2$, where $m \in \mathbb{N}$ is the number of sensors and the positive definite matrices $V_1, \ldots, V_m \in \mathbb{C}^{m \times m}$ are constructed in each MM iteration. For $m \geq 3$, no algorithm has been found to obtain a global minimum of $\mathcal{L}(W)$. Instead, block coordinate descent (BCD) methods with closed-form update formulas have been developed for minimizing $\mathcal{L}(W)$ and shown to be effective. One such BCD, called iterative projection (IP), updates one or two rows of $W$ in each iteration. Another, called iterative source steering (ISS), updates one column of the mixing matrix $A = W^{-1}$ in each iteration. Although the time complexity per iteration of ISS is $m$ times smaller than that of IP, the conventional ISS converges more slowly than the current fastest IP (called $\text{IP}_2$), which updates two rows of $W$ in each iteration. We here extend ISS to $\text{ISS}_2$, which can update two columns of $A$ in each iteration while maintaining the small time complexity. To this end, we provide a unified way of developing new ISS-type methods, from which $\text{ISS}_2$ as well as the conventional ISS can be obtained immediately in a systematic manner. Numerical experiments on separating reverberant speech mixtures show that our $\text{ISS}_2$ converges in fewer MM iterations than the conventional ISS and is comparable to $\text{IP}_2$.
    Comment: Accepted for publication in the 30th European Signal Processing Conference (EUSIPCO 2022).
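    As a rough illustration of the surrogate $\mathcal{L}(W)$ and a row-wise BCD step, the NumPy sketch below evaluates the surrogate and performs one IP-style sweep using the standard closed-form row update $w_i \leftarrow (W V_i)^{-1} e_i$ followed by normalization so that $w_i^h V_i w_i = 1$ (the one-row IP rule from auxiliary-function-based IVA; the paper's $\text{IP}_2$ and $\text{ISS}_2$ updates are different and not reproduced here). Rows of `W` store $w_i^h$; variable names are ours:

    ```python
    import numpy as np

    def surrogate(W, Vs):
        # L(W) = sum_i w_i^h V_i w_i - log |det W|^2, with row i of W = w_i^h
        quad = sum(np.real(W[i] @ Vs[i] @ W[i].conj()) for i in range(len(Vs)))
        return quad - 2.0 * np.log(np.abs(np.linalg.det(W)))

    def ip1_sweep(W, Vs):
        # One sweep of iterative projection (IP): each row update exactly
        # minimizes the surrogate over w_i, so L(W) never increases.
        m = W.shape[0]
        W = W.copy()
        for i in range(m):
            e_i = np.zeros(m)
            e_i[i] = 1.0
            u = np.linalg.solve(W @ Vs[i], e_i)          # w_i <- (W V_i)^{-1} e_i
            u /= np.sqrt(np.real(u.conj() @ Vs[i] @ u))  # so that w_i^h V_i w_i = 1
            W[i] = u.conj()                              # store w_i^h as row i
        return W
    ```

    Each sweep costs $O(m^3)$ per matrix inversion; the appeal of ISS-type updates is that they avoid these inversions while keeping the same monotone-decrease guarantee.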

    Count matroids of group-labeled graphs

    A graph G = (V, E) is called (k, β„“)-sparse if |F| ≀ k|V(F)| βˆ’ β„“ for any nonempty F βŠ† E, where V(F) denotes the set of vertices incident to F. It is known that the family of the edge sets of (k, β„“)-sparse subgraphs forms the family of independent sets of a matroid, called the (k, β„“)-count matroid of G. In this paper we shall investigate lifts of the (k, β„“)-count matroids by using group labelings on the edge set. By introducing a new notion called near-balancedness, we shall identify a new class of matroids whose independence condition is described as a count condition of the form |F| ≀ k|V(F)| βˆ’ β„“ + Ξ±_ψ(F) for some function Ξ±_ψ determined by a given group labeling ψ on E.
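    For small graphs, the (k, β„“)-sparsity count condition above can be checked directly by enumerating every nonempty edge subset. The brute-force sketch below does exactly that (exponential in |E|, so illustrative only; the function name is ours):

    ```python
    from itertools import combinations

    def is_k_l_sparse(edges, k, l):
        # Check |F| <= k * |V(F)| - l for every nonempty F of E,
        # where V(F) is the set of vertices incident to edges in F.
        for r in range(1, len(edges) + 1):
            for F in combinations(edges, r):
                VF = {v for e in F for v in e}
                if len(F) > k * len(VF) - l:
                    return False
        return True
    ```

    For example, with k = β„“ = 1 this condition characterizes forests: a 2-edge path satisfies it, while a triangle violates it (3 edges, 3 vertices, 3 > 1Β·3 βˆ’ 1).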